Leveraged buybacks
Debt-financed share buybacks generate positive short-term and long-run abnormal stock returns. Ex ante, leveraged buyback firms have more debt capacity, higher marginal tax rates, lower excess cash, and lower growth prospects than cash-financed buyback firms; ex post, they increase leverage and reduce investments more sharply. Firms that are over-levered ex ante see lower returns and real investments following leveraged buybacks, and the lower announcement returns of over-levered firms are concentrated in firms with weaker corporate governance. The evidence is consistent with leveraged buybacks enabling firms to optimize their leverage, on average benefiting shareholders; the benefits decrease with a firm's leverage ex ante.
Anableps: Adapting Bitrate for Real-Time Communication Using VBR-encoded Video
Content providers increasingly replace traditional constant bitrate with
variable bitrate (VBR) encoding in real-time video communication systems for
better video quality. However, VBR encoding often leads to large and frequent
bitrate fluctuation, inevitably deteriorating the efficiency of existing
adaptive bitrate (ABR) methods. To tackle this, we propose Anableps, which jointly considers network dynamics and VBR-encoding-induced video bitrate fluctuations to deploy the best ABR policy. With this aim, Anableps
uses sender-side information from the past to predict the video bitrate range
of upcoming frames. Such bitrate range is then combined with the receiver-side
observations to set the proper bitrate target for video encoding using a
reinforcement-learning-based ABR model. As revealed by extensive experiments on
a real-world trace-driven testbed, Anableps outperforms GCC with significant quality-of-experience improvements, e.g., 1.88x video quality, 57% less bitrate consumption, 85% less stalling, and 74% shorter interaction delay.
Comment: This paper will be presented at IEEE ICME 202
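The adaptation loop described in the abstract can be sketched as follows. All function and parameter names here are hypothetical, and a simple headroom heuristic stands in for the paper's reinforcement-learning policy:

```python
# Illustrative sketch of an Anableps-style bitrate decision (hypothetical
# names; a headroom heuristic stands in for the RL policy in the paper).

def predict_bitrate_range(past_frame_sizes_kbit, fps=30):
    """Predict the bitrate range of upcoming VBR frames from sender-side
    history of encoded frame sizes (in kbit)."""
    rates = [size * fps for size in past_frame_sizes_kbit]  # instantaneous kbps
    return min(rates), max(rates)

def choose_target_bitrate(past_frame_sizes_kbit, estimated_bandwidth_kbps, fps=30):
    """Combine the predicted VBR range with a receiver-side bandwidth
    estimate to pick a target bitrate for the encoder."""
    lo, hi = predict_bitrate_range(past_frame_sizes_kbit, fps)
    # Leave headroom for VBR peaks: if frames may overshoot the quietest
    # rate by a factor hi/lo, scale the target down by that factor.
    overshoot = hi / max(lo, 1.0)
    target = estimated_bandwidth_kbps / overshoot
    return max(lo, min(target, hi))  # clamp to the feasible encoding range

target = choose_target_bitrate([40, 55, 48, 60, 45], estimated_bandwidth_kbps=2000)
```

The key point the sketch captures is that the encoding target is set from both sides at once: the sender-side VBR range bounds what the encoder will actually emit, while the receiver-side estimate bounds what the network can carry.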
Geometry-Aware Video Quality Assessment for Dynamic Digital Human
Dynamic Digital Humans (DDHs) are 3D digital models animated with predefined motions. They are inevitably affected by noise/shift introduced during generation and by compression distortion introduced during transmission, both of which need to be perceptually evaluated. Usually, DDHs are displayed as 2D rendered animation videos, so it is natural to adapt video quality assessment (VQA) methods to DDH quality assessment (DDH-QA) tasks. However, VQA methods are highly dependent on viewpoints and less sensitive to geometry-based distortions. Therefore, in this paper, we propose a novel no-reference (NR) geometry-aware video quality assessment method for the DDH-QA challenge. Geometry
characteristics are described by the statistical parameters estimated from the
DDHs' geometry attribute distributions. Spatial and temporal features are
acquired from the rendered videos. Finally, all kinds of features are
integrated and regressed into quality values. Experimental results show that
the proposed method achieves state-of-the-art performance on the DDH-QA
database.
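One plausible reading of the "statistical parameters estimated from geometry attribute distributions" is the first standardized moments of a per-vertex attribute such as curvature; the exact feature set is an assumption here, not taken from the paper:

```python
import numpy as np

def geometry_statistics(attribute_values):
    """Summarize a geometry attribute distribution (e.g. per-vertex
    curvature) by its first four standardized moments. These particular
    statistics are an assumption, not the paper's exact feature set."""
    x = np.asarray(attribute_values, dtype=float)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma
    return {
        "mean": float(mu),
        "std": float(sigma),
        "skewness": float((z ** 3).mean()),
        "excess_kurtosis": float((z ** 4).mean() - 3.0),
    }

# A standard normal sample should have statistics near (0, 1, 0, 0).
stats = geometry_statistics(np.random.default_rng(0).normal(size=10_000))
```

Such distribution-level summaries are viewpoint-independent by construction, which is exactly the property the rendered-video features lack.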
Perceptual Quality Assessment for Digital Human Heads
Digital humans have attracted increasing research interest over the last decade, with great effort put into their generation, representation, rendering, and animation. However, the quality assessment of digital humans has fallen behind. Therefore, to tackle the challenge of digital human quality assessment, we propose the first large-scale quality
assessment database for three-dimensional (3D) scanned digital human heads
(DHHs). The constructed database consists of 55 reference DHHs and 1,540
distorted DHHs along with the subjective perceptual ratings. Then, a simple yet
effective full-reference (FR) projection-based method is proposed to evaluate
the visual quality of DHHs. The pretrained Swin Transformer tiny is employed
for hierarchical feature extraction and the multi-head attention module is
utilized for feature fusion. The experimental results reveal that the proposed
method exhibits state-of-the-art performance among mainstream FR metrics and can provide an effective FR-IQA index for DHHs.
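The multi-head attention fusion step can be illustrated with a minimal NumPy sketch. The projection weights below are random placeholders rather than the trained Swin/attention parameters, and the token shapes are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(tokens, num_heads=4, rng=None):
    """Fuse a stack of hierarchical feature tokens with multi-head
    self-attention. Projection weights are random placeholders; in the
    actual method they are learned."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = tokens.shape
    assert d % num_heads == 0, "model dim must split evenly across heads"
    dh = d // num_heads
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    heads = []
    for h in range(num_heads):
        s = slice(h * dh, (h + 1) * dh)
        attn = softmax(q[:, s] @ k[:, s].T / np.sqrt(dh))  # (n, n) weights
        heads.append(attn @ v[:, s])
    return np.concatenate(heads, axis=1)  # fused features, shape (n, d)

# Example: fuse 7 feature tokens of dimension 16 from different backbone stages.
fused = multi_head_attention(np.random.default_rng(1).standard_normal((7, 16)))
```

Attention-based fusion lets tokens from different hierarchy levels reweight each other, rather than being concatenated with fixed importance.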
Simple Baselines for Projection-based Full-reference and No-reference Point Cloud Quality Assessment
Point clouds are widely used in 3D content representation and have various
applications in multimedia. However, compression and simplification processes
inevitably result in the loss of quality-aware information under storage and
bandwidth constraints. Therefore, there is an increasing need for effective
methods to quantify the degree of distortion in point clouds. In this paper, we
propose simple baselines for projection-based point cloud quality assessment
(PCQA) to tackle this challenge. We use multi-projections obtained via a common
cube-like projection process from the point clouds for both full-reference (FR)
and no-reference (NR) PCQA tasks. Quality-aware features are extracted with
popular vision backbones. The FR quality representation is computed as the
similarity between the feature maps of reference and distorted projections
while the NR quality representation is obtained by simply squeezing the feature
maps of distorted projections with average pooling. The corresponding quality
representations are regressed into visual quality scores by fully-connected
layers. Taking part in the ICIP 2023 PCVQA Challenge, we succeeded in achieving
the top spot in four out of the five competition tracks.
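The two quality representations described above (feature-map similarity for FR, average pooling for NR) admit a compact sketch. Backbone features are taken as given, and the shapes and names are assumptions:

```python
import numpy as np

def fr_quality(ref_feat, dist_feat, eps=1e-8):
    """Full-reference representation: per-channel cosine similarity between
    reference and distorted projection feature maps of shape (C, H, W)."""
    r = ref_feat.reshape(ref_feat.shape[0], -1)
    d = dist_feat.reshape(dist_feat.shape[0], -1)
    num = (r * d).sum(axis=1)
    den = np.linalg.norm(r, axis=1) * np.linalg.norm(d, axis=1) + eps
    return num / den  # one similarity value per channel

def nr_quality(dist_feat):
    """No-reference representation: global average pooling of the
    distorted projection feature maps."""
    return dist_feat.mean(axis=(1, 2))

# Identical maps give similarity ~1 per channel; pooling gives one value per channel.
feat = np.random.default_rng(0).standard_normal((4, 8, 8))
fr_repr = fr_quality(feat, feat)
nr_repr = nr_quality(feat)
```

Either representation would then be regressed to a scalar quality score by the fully-connected layers the abstract mentions.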
Improving the Gilbert-Varshamov Bound by Graph Spectral Method
We improve the Gilbert-Varshamov bound by a graph spectral method. The Gilbert graph is a graph with all vectors in $\mathbb{F}_q^n$ as vertices, where two vertices are adjacent if their Hamming distance is less than $d$. In this paper, we calculate the eigenvalues and eigenvectors of the Gilbert graph using the properties of Cayley graphs. The improved bound is associated with the minimum eigenvalue of the graph. Finally, we give an algorithm to calculate the bound and linear codes which satisfy the bound.
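For intuition, the Gilbert graph and its spectrum can be computed by brute force for small binary parameters (a minimal sketch, not the paper's Cayley-graph derivation):

```python
import numpy as np
from itertools import product
from math import comb

def gilbert_graph_spectrum(n, d):
    """Build the Gilbert graph on F_2^n (two vectors adjacent iff
    0 < Hamming distance < d) and return its eigenvalues, ascending."""
    vertices = list(product([0, 1], repeat=n))
    N = len(vertices)
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            dist = sum(a != b for a, b in zip(vertices[i], vertices[j]))
            if dist < d:
                A[i, j] = A[j, i] = 1.0
    return np.linalg.eigvalsh(A)

eigs = gilbert_graph_spectrum(n=5, d=3)
# The graph is regular of degree sum_{i=1}^{d-1} C(n, i) = C(5,1) + C(5,2) = 15,
# so its largest eigenvalue is 15; the smallest eigenvalue drives the bound.
degree = comb(5, 1) + comb(5, 2)
```

A code with minimum distance $d$ is exactly an independent set of this graph (no two codewords closer than $d$), which is why its extreme eigenvalues constrain achievable code sizes.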
On the Weight Distribution of Weights Less than $2w_{\min}$ in Polar Codes
The number of low-weight codewords is critical to the performance of
error-correcting codes. In 1970, Kasami and Tokura characterized the codewords
of Reed-Muller (RM) codes whose weights are less than $2w_{\min}$, where $w_{\min}$ represents the minimum weight. In this paper, we extend their results to decreasing polar codes. We present closed-form expressions for the number of codewords in decreasing polar codes with weights less than $2w_{\min}$. Moreover, the proposed enumeration algorithm runs in polynomial time with respect to the code length.
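A brute-force check on a tiny code illustrates what is being counted. Here RM(1,3), a special case of decreasing polar codes, is built from the Kronecker transform; the paper's closed-form, polynomial-time method is not reproduced:

```python
import numpy as np
from itertools import product

def kronecker_generator(m):
    """Polar transform: m-fold Kronecker power of F = [[1,0],[1,1]] over GF(2)."""
    F = np.array([[1, 0], [1, 1]], dtype=int)
    G = np.array([[1]], dtype=int)
    for _ in range(m):
        G = np.kron(G, F)
    return G

def low_weight_profile(rows, m):
    """Enumerate all codewords spanned by the chosen rows of the transform
    and count the nonzero ones with weight < 2 * w_min (brute force; only
    feasible for tiny codes)."""
    G = kronecker_generator(m)[rows]
    weights = []
    for msg in product([0, 1], repeat=len(rows)):
        w = int(np.mod(np.array(msg) @ G, 2).sum())
        if w > 0:
            weights.append(w)
    w_min = min(weights)
    return w_min, sum(1 for w in weights if w < 2 * w_min)

# Rows whose index has Hamming weight >= 2 give RM(1,3), a decreasing polar
# code with parameters [8, 4, 4] and weight enumerator 1 + 14z^4 + z^8.
w_min, count = low_weight_profile(rows=[3, 5, 6, 7], m=3)
```

For realistic code lengths this enumeration is exponential in the dimension, which is exactly the cost the closed-form expressions avoid.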
- …